
    The right to enjoy the benefits of scientific progress: in search of state obligations in relation to health

    After receiving little attention over the past decades, one of the least known human rights, the right to enjoy the benefits of scientific progress and its applications, has had its dust blown off. Although included in both the Universal Declaration of Human Rights (UDHR) and the International Covenant on Economic, Social and Cultural Rights (ICESCR), albeit at the very end of both instruments, this right received hardly any attention from States, UN bodies and programmes, or academics. The role of science in societies, with its benefits and potential dangers, has been discussed in various international fora, but hardly ever in a human rights context. Nowadays, in a world that increasingly turns to science and technology for solutions to persistent socio-economic and development problems, the human dimension of science is also receiving increased attention, including the human right to enjoy the benefits of scientific progress and its applications. This contribution analyses the possible legal obligations of States in relation to this right, in particular as regards health.

    A review of RCTs in four medical journals to assess the use of imputation to overcome missing data in quality of life outcomes

    Background: Randomised controlled trials (RCTs) are perceived as the gold-standard method for evaluating healthcare interventions, and increasingly include quality of life (QoL) measures. The observed results are susceptible to bias if a substantial proportion of outcome data are missing. This review aimed to determine whether imputation was used to deal with missing QoL outcomes. Methods: A random selection of 285 RCTs published during 2005/6 in the British Medical Journal, the Lancet, the New England Journal of Medicine and the Journal of the American Medical Association was identified. Results: QoL outcomes were reported in 61 (21%) trials. Six (10%) reported having no missing data, 20 (33%) reported ≤10% missing, eleven (18%) reported 11%–20% missing, and eleven (18%) reported >20% missing. Missingness was unclear in 13 (21%). Missing data were imputed in 19 (31%) of the 61 trials. Imputation was part of the primary analysis in 13 trials, but only a sensitivity analysis in six. Last value carried forward was used in 12 trials and multiple imputation in two. Following imputation, the most common analysis method was analysis of covariance (10 trials). Conclusion: The majority of studies did not impute missing data and carried out a complete-case analysis. For those studies that did impute missing data, researchers tended to prefer simpler methods of imputation, despite more sophisticated methods being available. The Health Services Research Unit is funded by the Chief Scientist Office of the Scottish Government Health Directorate. Shona Fielding is also currently funded by the Chief Scientist Office on a Research Training Fellowship (CZF/1/31).
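
    The two approaches that dominate this review, complete-case analysis and last value carried forward (LOCF), can be contrasted with a minimal sketch. The QoL scores below are invented for illustration; this is not data from the reviewed trials:

```python
# Hypothetical QoL scores at four visits; None marks missing outcome data.
patients = {
    "p1": [62, 65, None, None],   # dropped out after visit 2
    "p2": [55, 54, 53, 51],       # complete
    "p3": [70, None, 72, 74],     # one intermittent miss
}

def locf(series):
    """Last value carried forward: replace each missing value
    with the most recent observed value."""
    filled, last = [], None
    for v in series:
        if v is not None:
            last = v
        filled.append(last)
    return filled

# Complete-case analysis simply drops any patient with a missing value.
complete_cases = {pid: s for pid, s in patients.items() if None not in s}

locf_filled = {pid: locf(s) for pid, s in patients.items()}
```

LOCF keeps every patient in the analysis but, as the review notes, it is one of the simpler methods: it assumes outcomes stay flat after dropout, which can bias results when patients drop out because they are deteriorating.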

    A Kernel to Exploit Informative Missingness in Multivariate Time Series from EHRs

    A large fraction of electronic health records (EHRs) consists of clinical measurements collected over time, such as lab tests and vital signs, which provide important information about a patient's health status. These sequences of clinical measurements are naturally represented as time series, characterized by multiple variables and large amounts of missing data, which complicate the analysis. In this work, we propose a novel kernel capable of exploiting both the information in the observed values and the information hidden in the missing patterns of multivariate time series (MTS) originating, e.g., from EHRs. The kernel, called TCK_IM, is designed using an ensemble learning strategy in which the base models are novel mixed-mode Bayesian mixture models that can effectively exploit informative missingness without having to resort to imputation methods. Moreover, the ensemble approach ensures robustness to hyperparameters, and therefore TCK_IM is particularly well suited when there is a lack of labels, a known challenge in medical applications. Experiments on three real-world clinical datasets demonstrate the effectiveness of the proposed kernel. Comment: 2020 International Workshop on Health Intelligence, AAAI-20. arXiv admin note: text overlap with arXiv:1907.0525
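
    TCK_IM itself builds on mixed-mode Bayesian mixture models, but the underlying idea of treating the missingness pattern as a signal in its own right can be illustrated with a simple value/mask decomposition. This is a sketch of the general technique, not the authors' implementation, and the measurements are invented:

```python
# A toy multivariate clinical time series: rows are time steps, columns are
# variables (e.g. a lab value and a temperature); None = never measured.
mts = [
    [7.2, None],
    [None, 98.6],
    [7.4, 99.1],
]

def split_values_and_mask(series, fill=0.0):
    """Separate observed values from a binary missingness mask.
    The mask itself can carry 'informative missingness': which tests
    were ordered, and when, may itself predict the patient's outcome."""
    values, mask = [], []
    for row in series:
        values.append([fill if v is None else v for v in row])
        mask.append([0 if v is None else 1 for v in row])
    return values, mask

values, mask = split_values_and_mask(mts)
```

A model given both `values` and `mask` can learn from the missingness pattern directly, without first imputing the gaps away, which is the property the abstract highlights.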

    Comparison of methods for handling missing data on immunohistochemical markers in survival analysis of breast cancer

    Background: Tissue micro-arrays (TMAs) are increasingly used to generate data on the molecular phenotype of tumours in clinical epidemiology studies, such as studies of disease prognosis. However, TMA data are particularly prone to missingness. A variety of methods to deal with missing data are available, but their validity depends on the structure of the missing data, and there are few empirical studies dealing with missing data from molecular pathology. The purpose of this study was to investigate the results of four commonly used approaches to handling missing data in a large, multi-centre study of the molecular pathological determinants of prognosis in breast cancer. Patients and Methods: We pooled data from over 11 000 cases of invasive breast cancer from five studies that collected information on seven prognostic indicators together with survival time data. We compared the results of a multivariate Cox regression using four approaches to handling missing data: complete case analysis (CCA), mean substitution (MS), multiple imputation without inclusion of the outcome (MI−) and multiple imputation with inclusion of the outcome (MI+). We also performed an analysis in which missing data were simulated under different assumptions and the results of the four methods were compared. Results: Over half the cases had missing data on at least one of the seven variables, and 11% had missing data on four or more. The multivariate hazard ratio estimates based on the multiple imputation models were very similar to those derived after using MS, with similar standard errors. Hazard ratio estimates based on CCA were only slightly different, but they were less precise, as the standard errors were large. However, in data simulated to be missing completely at random (MCAR) or missing at random (MAR), estimates for MI were least biased and most accurate, whereas estimates for CCA were most biased and least accurate. Conclusion: In this study, empirical results from analyses using CCA, MS, MI− and MI+ were similar, although results from CCA were less precise. The simulation results suggest that, in general, MI is likely to be the best approach. Given the ease of implementing MI in standard statistical software, the results of MI and CCA should be compared in any multivariate analysis where missing data are a problem. © 2011 Cancer Research UK. All rights reserved.
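
    The three families of methods compared here can be sketched side by side. This is a deliberately simplified illustration on invented marker values: real multiple imputation draws from a fitted imputation model (e.g. chained equations) and pools variances as well as point estimates under Rubin's rules, not just the means as below:

```python
import random

# Hypothetical marker values for six tumours; None marks missing TMA data.
marker = [1.2, None, 0.8, 1.5, None, 1.1]

observed = [v for v in marker if v is not None]

# Complete-case analysis (CCA): discard records with any missing value.
cca = observed

# Mean substitution (MS): replace each missing value with the observed mean.
mean_obs = sum(observed) / len(observed)
ms = [mean_obs if v is None else v for v in marker]

# A toy stand-in for multiple imputation (MI): build m completed datasets
# by sampling plausible values, analyse each, then pool the estimates.
def toy_multiple_imputation(data, m=5, seed=0):
    rng = random.Random(seed)
    obs = [v for v in data if v is not None]
    estimates = []
    for _ in range(m):
        completed = [rng.choice(obs) if v is None else v for v in data]
        estimates.append(sum(completed) / len(completed))  # per-dataset mean
    return sum(estimates) / len(estimates)  # pooled point estimate

mi_estimate = toy_multiple_imputation(marker)
```

CCA shrinks the sample (hence the larger standard errors reported above), MS preserves the sample size but understates variability, and MI propagates the uncertainty about the missing values through to the final estimate.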

    Statistical Methodological Issues in Handling of Fatty Acid Data: Percentage or Concentration, Imputation and Indices

    Basic aspects of the handling of fatty acid data have remained largely underexposed. Of these, we aimed to address three statistical methodological issues, by quantitatively exemplifying their potential confounding impact on analytical outcomes: (1) presenting results as relative percentages or absolute concentrations, (2) handling of missing/non-detectable values, and (3) using structural indices for data reduction. To this end, we reanalyzed an example dataset containing erythrocyte fatty acid concentrations of 137 recurrently depressed patients and 73 controls. First, correlations between data presented as percentages and as concentrations varied for different fatty acids, depending on their correlation with the total fatty acid concentration. Second, multiple imputation of non-detects resulted in differences in significance compared with zero-substitution or omission of non-detects. Third, patients' chain length, unsaturation and peroxidation indices were significantly lower than those of controls, which corresponded with the patterns interpreted from the individual fatty acid tests. In conclusion, results from our example dataset show that statistical methodological choices can have a significant influence on the outcomes of fatty acid analysis, which emphasizes the relevance of: (1) hypothesis-based presentation of fatty acids (as percentages or concentrations); (2) multiple imputation, preventing the bias introduced by non-detects; and (3) the possibility of using (structural) indices to delineate fatty acid patterns, thereby preventing multiple testing.
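
    The percentage-versus-concentration distinction and the idea of a structural index can be made concrete with a small sketch. The three fatty acid species and their concentrations below are invented, not taken from the paper's dataset:

```python
# Hypothetical erythrocyte fatty acid concentrations (arbitrary units)
# and the number of double bonds for each species.
concentrations = {"16:0": 200.0, "18:1": 100.0, "22:6": 50.0}
double_bonds   = {"16:0": 0,     "18:1": 1,     "22:6": 6}

total = sum(concentrations.values())

# Relative presentation: each fatty acid as a percentage of the total.
# A change in ANY species shifts ALL percentages, which is why
# correlations between the two presentations differ per fatty acid.
percentages = {fa: 100.0 * c / total for fa, c in concentrations.items()}

# A structural index for data reduction: here, an unsaturation index
# computed as the double-bond count weighted by each species' share.
unsaturation_index = sum(
    double_bonds[fa] * percentages[fa] / 100.0 for fa in concentrations
)
```

Testing one composite index in place of many per-species comparisons is what lets such indices "prevent multiple testing", at the cost of no longer localizing an effect to a single fatty acid.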

    Failure of available scoring systems to predict ongoing infection in patients with abdominal sepsis after their initial emergency laparotomy

    Background: To examine whether commonly used scoring systems, designed to predict overall outcome in critically ill patients, can select patients with abdominal sepsis who have ongoing infection needing relaparotomy. Methods: Data from an RCT comparing two surgical strategies were used. The study population consisted of 221 patients at risk of ongoing abdominal infection. The following scoring systems were evaluated with logistic regression analysis for their ability to select patients requiring a relaparotomy: the APACHE-II score, SAPS-II, the Mannheim Peritonitis Index (MPI), MODS, the SOFA score, and the acute part of the APACHE-II score (APS). Results: The proportion of patients requiring a relaparotomy was 32% (71/221). Only two scores discriminated better than chance in identifying patients with ongoing infection needing relaparotomy: the APS on day 1 (AUC 0.61; 95% CI 0.52–0.69) and the SOFA score on day 2 (AUC 0.60; 95% CI 0.52–0.69). However, correctly identifying 90% of all patients needing a relaparotomy would require such a low cut-off value that around 80% of the patients identified by these scoring systems would have negative findings at relaparotomy. Conclusions: None of the widely used scoring systems designed to predict overall outcome in critically ill patients is of clinical value for identifying patients with ongoing infection needing relaparotomy. More specific tools are needed to assist physicians in their daily monitoring and selection of these patients after the initial emergency laparotomy. Trial registration number: ISRCTN 51729393
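
    The central trade-off in this abstract, that catching ~90% of patients who need a relaparotomy forces a cut-off so low that most flagged patients have negative findings, can be illustrated with a toy cut-off search. The scores and outcomes are invented, not study data:

```python
# Hypothetical severity scores for patients who did (1) or did not (0)
# need a relaparotomy; higher scores are meant to indicate higher risk.
scores      = [3, 5, 6, 7, 8, 9, 10, 11, 12, 14]
needs_relap = [0, 0, 1, 0, 0, 1, 0,  1,  0,  1]

def sensitivity_specificity(cutoff):
    """Classify score >= cutoff as 'predicted to need relaparotomy'."""
    tp = sum(1 for s, y in zip(scores, needs_relap) if y == 1 and s >= cutoff)
    fn = sum(1 for s, y in zip(scores, needs_relap) if y == 1 and s < cutoff)
    tn = sum(1 for s, y in zip(scores, needs_relap) if y == 0 and s < cutoff)
    fp = sum(1 for s, y in zip(scores, needs_relap) if y == 0 and s >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

# Highest cut-off that still catches at least 90% of true cases.
best = max(c for c in sorted(set(scores))
           if sensitivity_specificity(c)[0] >= 0.9)
sens, spec = sensitivity_specificity(best)
```

With a weak discriminator, the cut-off meeting the sensitivity target sits far down the score range, so specificity collapses and most patients flagged for relaparotomy would be operated on unnecessarily, mirroring the ~80% negative-findings figure reported above.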

    A computational model of perception and action for cognitive robotics

    Robots are increasingly expected to perform tasks in complex environments. To this end, engineers provide them with processing architectures that are based on models of human information processing. In contrast to traditional models, where information processing is typically set up in stages (i.e., from perception to cognition to action), it is increasingly acknowledged by psychologists and robot engineers that perception and action are parts of an interactive and integrated process. In this paper, we present HiTEC, a novel computational (cognitive) model that allows for direct interaction between perception and action as well as for cognitive control, demonstrated by task-related attentional influences. Simulation results show that key behavioral studies can be readily replicated. Three processing aspects of HiTEC are stressed for their importance for cognitive robotics: (1) ideomotor learning of action control, (2) the influence of task context and attention on perception, action planning, and learning, and (3) the interaction between perception and action planning. Implications for the design of cognitive robotics are discussed.

    Comparison of CATs, CURB-65 and PMEWS as Triage Tools in Pandemic Influenza Admissions to UK Hospitals: Case Control Analysis Using Retrospective Data

    Triage tools have an important role in pandemics in identifying those most likely to benefit from higher levels of care. We compared the Community Assessment Tools (CATs), the CURB-65 score and the Pandemic Medical Early Warning Score (PMEWS) in their ability to predict higher levels of care (high dependency, Level 2, or intensive care, Level 3) and/or death in patients at or shortly after admission to hospital with A/H1N1 2009 pandemic influenza. This was a case-control analysis using retrospectively collected data from the FLU-CIN cohort (1040 adults, 480 children) with PCR-confirmed A/H1N1 2009 influenza. Areas under the receiver operating characteristic curve (AUROC), sensitivity, specificity, positive predictive values and negative predictive values were calculated. CATs best predicted Level 2/3 admission in both adults [AUROC (95% CI): CATs 0.77 (0.73, 0.80); CURB-65 0.68 (0.64, 0.72); PMEWS 0.68 (0.64, 0.73); p<0.001] and children [AUROC: CATs 0.74 (0.68, 0.80); CURB-65 0.52 (0.46, 0.59); PMEWS 0.69 (0.62, 0.75); p<0.001]. CURB-65 and CATs were similar in predicting death in adults, with both performing better than PMEWS, and CATs best predicted death in children. CATs were the best predictor of Level 2/3 care and/or death for both adults and children, and are potentially useful triage tools for predicting the need for higher levels of care and/or mortality in patients of all ages.
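
    The AUROC figures quoted above have a useful rank interpretation: the area equals the probability that a randomly chosen patient with the outcome scores higher than a randomly chosen patient without it. A sketch using this Mann-Whitney formulation, with invented scores rather than FLU-CIN data:

```python
# Toy triage scores for admitted patients; outcome 1 = needed Level 2/3
# care or died, 0 = did not. Values are invented for illustration.
scores  = [2, 3, 3, 5, 6, 7, 8, 9]
outcome = [0, 0, 1, 0, 1, 0, 1, 1]

def auroc(scores, outcome):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (case, non-case) pairs where the case scores
    higher, counting ties as half a win."""
    pos = [s for s, y in zip(scores, outcome) if y == 1]
    neg = [s for s, y in zip(scores, outcome) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

area = auroc(scores, outcome)
```

On this reading, an AUROC of 0.77 (CATs in adults) means that about three times out of four the tool ranks a patient who needed Level 2/3 care above one who did not, while 0.52 (CURB-65 in children) is barely better than a coin flip.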